Results 1 - 7 of 7
1.
PLoS One; 18(5): e0285703, 2023.
Article in English | MEDLINE | ID: mdl-37195925

ABSTRACT

Sleep is an important indicator of a person's health, and its accurate and cost-effective quantification is of great value in healthcare. The gold standard for sleep assessment and the clinical diagnosis of sleep disorders is polysomnography (PSG). However, PSG requires an overnight clinic visit and trained technicians to score the obtained multimodality data. Wrist-worn consumer devices, such as smartwatches, are a promising alternative to PSG because of their small form factor, continuous monitoring capability, and popularity. Unlike PSG, however, wearables-derived data are noisier and far less information-rich because of the smaller number of modalities and the less accurate measurements afforded by the small form factor. Given these challenges, most consumer devices perform two-stage (i.e., sleep-wake) classification, which is inadequate for deep insights into a person's sleep health. The challenging multi-class (three, four, or five-class) staging of sleep using data from wrist-worn wearables remains unresolved. The difference in data quality between consumer-grade wearables and lab-grade clinical equipment is the motivation behind this study. In this paper, we present an artificial intelligence (AI) technique termed sequence-to-sequence LSTM for automated mobile sleep staging (SLAMSS), which can perform three-class (wake, NREM, REM) and four-class (wake, light, deep, REM) sleep classification from activity (i.e., wrist-accelerometry-derived locomotion) and two coarse heart rate measures, both of which can be reliably obtained from a consumer-grade wrist-wearable device. Our method relies on raw time-series datasets and obviates the need for manual feature selection. We validated our model using actigraphy and coarse heart rate data from two independent study populations: the Multi-Ethnic Study of Atherosclerosis (MESA; N = 808) cohort and the Osteoporotic Fractures in Men (MrOS; N = 817) cohort. SLAMSS achieves an overall accuracy of 79%, a weighted F1 score of 0.80, 77% sensitivity, and 89% specificity for three-class sleep staging and an overall accuracy of 70-72%, a weighted F1 score of 0.72-0.73, 64-66% sensitivity, and 89-90% specificity for four-class sleep staging in the MESA cohort. It yielded an overall accuracy of 77%, a weighted F1 score of 0.77, 74% sensitivity, and 88% specificity for three-class sleep staging and an overall accuracy of 68-69%, a weighted F1 score of 0.68-0.69, 60-63% sensitivity, and 88-89% specificity for four-class sleep staging in the MrOS cohort. These results were achieved with feature-poor inputs at a low temporal resolution. In addition, we extended our three-class staging model to an unrelated Apple Watch dataset. Importantly, SLAMSS predicts the duration of each sleep stage with high accuracy. This is especially significant for four-class sleep staging, where deep sleep is severely underrepresented. We show that, by appropriately choosing the loss function to address the inherent class imbalance, our method can accurately estimate deep sleep time (SLAMSS/MESA: 0.61±0.69 hours vs. PSG/MESA ground truth: 0.60±0.60 hours; SLAMSS/MrOS: 0.53±0.66 hours vs. PSG/MrOS ground truth: 0.55±0.57 hours). Deep sleep quality and quantity are vital metrics and early indicators for a number of diseases. Our method, which enables accurate deep sleep estimation from wearables-derived data, is therefore promising for a variety of clinical applications requiring long-term deep sleep monitoring.
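
As a rough illustration of the sequence-to-sequence idea described above, the sketch below builds a small bidirectional LSTM that maps a night of per-epoch activity and coarse heart-rate inputs to per-epoch stage logits and trains it with a class-weighted cross-entropy loss to counter the scarcity of deep sleep. The layer sizes, class weights, and input dimensions are illustrative assumptions, not the published SLAMSS configuration.

```python
import torch
import torch.nn as nn

class SleepStager(nn.Module):
    """Sequence-to-sequence staging sketch: one stage label per 30-s epoch."""
    def __init__(self, n_features=3, n_classes=4, hidden=64):
        super().__init__()
        # n_features: activity count plus two coarse heart-rate measures per epoch
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (batch, epochs, n_features)
        h, _ = self.lstm(x)                    # (batch, epochs, 2 * hidden)
        return self.head(h)                    # per-epoch class logits

# Class-weighted loss to counter the scarcity of deep-sleep epochs
# (weights are placeholders, not the paper's values).
weights = torch.tensor([1.0, 1.0, 4.0, 1.5])   # wake, light, deep, REM
criterion = nn.CrossEntropyLoss(weight=weights)

model = SleepStager()
x = torch.randn(8, 120, 3)                     # 8 nights x 120 epochs x 3 inputs
labels = torch.randint(0, 4, (8, 120))
loss = criterion(model(x).reshape(-1, 4), labels.reshape(-1))
loss.backward()
```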


Subjects
Actigraphy; Artificial Intelligence; Male; Humans; Heart Rate/physiology; Sleep/physiology; Sleep Stages/physiology; Time Factors; Reproducibility of Results
2.
Neuroimage; 237: 118126, 2021 08 15.
Article in English | MEDLINE | ID: mdl-33957234

ABSTRACT

Tau neurofibrillary tangles, a pathophysiological hallmark of Alzheimer's disease (AD), exhibit a stereotypical spatiotemporal trajectory that is strongly correlated with disease progression and cognitive decline. Personalized prediction of tau progression is, therefore, vital for the early diagnosis and prognosis of AD. Evidence from both animal and human studies suggests that tau spreads along the brain's preexisting neural connectivity conduits. We present here an analytic graph diffusion framework for individualized predictive modeling of tau progression along the structural connectome. To account for physiological processes that lead to active generation and clearance of tau alongside passive diffusion, our model uses an inhomogeneous graph diffusion equation with a source term and provides closed-form solutions to this equation for linear and exponential source functionals. Longitudinal imaging data from two cohorts, the Harvard Aging Brain Study (HABS) and the Alzheimer's Disease Neuroimaging Initiative (ADNI), were used to validate the model. The clinical data used for developing and validating the model include regional tau measures extracted from longitudinal positron emission tomography (PET) scans based on the 18F-Flortaucipir radiotracer and individual structural connectivity maps computed from diffusion tensor imaging (DTI) by means of tractography and streamline counting. Two-timepoint tau PET scans were used to assess the goodness of model fit. Three-timepoint tau PET scans were used to assess predictive accuracy via comparison of predicted and observed tau measures at the third timepoint. Our results show high consistency between predicted and observed tau, as well as between predicted and observed differential tau, in region-based analyses. While the prognostic value of this approach needs to be validated in a larger cohort, our preliminary results suggest that our longitudinal predictive model, which offers an in vivo macroscopic perspective on tau progression in the brain, is potentially promising as a personalizable predictive framework for AD.
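
To make the modeling idea concrete, the following sketch solves an inhomogeneous graph diffusion equation of the form dx/dt = -beta*L*x + alpha*x0 in closed form via the Laplacian eigenbasis. The diffusivity beta, the source weight alpha, and the choice of a source proportional to baseline tau are illustrative assumptions rather than the exact source functionals used in the paper.

```python
import numpy as np

def predict_tau(conn, tau0, t, beta=0.5, alpha=0.05):
    """Closed-form solution of dx/dt = -beta * L x + alpha * tau0 on a graph.

    conn : (N, N) symmetric structural connectivity matrix
    tau0 : (N,) baseline regional tau burden
    t    : prediction horizon (same time units as beta)
    """
    L = np.diag(conn.sum(axis=1)) - conn        # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)                  # L = U diag(lam) U^T, lam >= 0
    y0 = U.T @ tau0                             # baseline in the eigenbasis
    decay = np.exp(-beta * lam * t)
    # Source response (1 - e^{-beta*lam*t}) / (beta*lam); the lam -> 0 limit is t.
    with np.errstate(divide="ignore", invalid="ignore"):
        resp = np.where(lam > 1e-12, (1.0 - decay) / (beta * lam), t)
    return U @ (decay * y0 + alpha * resp * y0)

# Toy usage on a random symmetric connectome (illustrative only).
rng = np.random.default_rng(0)
A = rng.random((10, 10)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
tau_pred = predict_tau(A, rng.random(10), t=2.0)
```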


Subjects
Alzheimer Disease; Diffusion Tensor Imaging; Disease Progression; Models, Neurological; Nerve Net; Positron-Emission Tomography; tau Proteins/metabolism; Aged; Aged, 80 and over; Alzheimer Disease/diagnostic imaging; Alzheimer Disease/metabolism; Alzheimer Disease/pathology; Datasets as Topic; Female; Humans; Longitudinal Studies; Male; Nerve Net/diagnostic imaging; Nerve Net/metabolism; Nerve Net/pathology; Prognosis
3.
Med Image Comput Comput Assist Interv; 12267: 418-427, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33263115

ABSTRACT

Tau tangles are a pathophysiological hallmark of Alzheimer's disease (AD) and exhibit a stereotypical pattern of spatiotemporal spread which has strong links to disease progression and cognitive decline. Preclinical evidence suggests that tau spread depends on neuronal connectivity rather than physical proximity between different brain regions. Here, we present a novel physics-informed geometric learning model for predicting tau buildup and spread that learns patterns directly from longitudinal tau imaging data while receiving guidance from governing physical principles. Implemented as a graph neural network with physics-based regularization in latent space, the model enables effective training on smaller datasets. For training and validation of the model, we used longitudinal tau measures from positron emission tomography (PET) and structural connectivity graphs from diffusion tensor imaging (DTI) from the Harvard Aging Brain Study. The model led to a higher peak signal-to-noise ratio and a lower mean squared error than both an unregularized graph neural network and a differential equation solver. The method was validated using both two-timepoint and three-timepoint tau PET measures. The effectiveness of the approach was further confirmed by a cross-validation study.
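
The sketch below shows one way to couple a small graph network with a physics-based regularizer: a diffusion-like latent smoothness term built from the graph Laplacian stands in for the paper's physics constraint, and the layer sizes and penalty weight are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class TauGNN(nn.Module):
    """Graph network sketch: encode regional tau into latent node embeddings,
    mix them through a normalized adjacency, and decode a follow-up estimate."""
    def __init__(self, d_hidden=16):
        super().__init__()
        self.enc = nn.Linear(1, d_hidden)
        self.dec = nn.Linear(d_hidden, 1)

    def forward(self, tau, a_norm):             # tau: (N, 1), a_norm: (N, N)
        z = torch.relu(a_norm @ self.enc(tau))  # latent node states after graph mixing
        return self.dec(z).squeeze(-1), z

def physics_penalty(z, laplacian, weight=0.1):
    # Diffusion-like prior in latent space: trace(z^T L z) is small when
    # strongly connected regions carry similar latent states.
    return weight * torch.trace(z.T @ laplacian @ z) / z.shape[0]

# Training objective sketch: data fit on follow-up tau plus the physics term, e.g.
# loss = mse(pred, tau_followup) + physics_penalty(z, laplacian)
```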

4.
IEEE Trans Comput Imaging; 6: 518-528, 2020.
Article in English | MEDLINE | ID: mdl-32055649

ABSTRACT

Positron emission tomography (PET) suffers from severe resolution limitations which reduce its quantitative accuracy. In this paper, we present a super-resolution (SR) imaging technique for PET based on convolutional neural networks (CNNs). To facilitate the resolution recovery process, we incorporate high-resolution (HR) anatomical information based on magnetic resonance (MR) imaging. We introduce the spatial location information of the input image patches as additional CNN inputs to accommodate the spatially-variant nature of the blur kernels in PET. We compared the performance of shallow (3-layer) and very deep (20-layer) CNNs with various combinations of the following inputs: low-resolution (LR) PET, radial locations, axial locations, and HR MR. To validate the CNN architectures, we performed both realistic simulation studies using the BrainWeb digital phantom and clinical studies using neuroimaging datasets. For both simulation and clinical studies, the LR PET images were based on the Siemens HR+ scanner. Two different scenarios were examined in simulation: one where the target HR image is the ground-truth phantom image and another where the target HR image is based on the Siemens HRRT scanner, a high-resolution dedicated brain PET scanner. The latter scenario was also examined using clinical neuroimaging datasets. A number of factors affected the relative performance of the different CNN designs examined, including network depth, target image quality, and the resemblance between the target and anatomical images. In general, however, all deep CNNs outperformed classical penalized deconvolution and partial volume correction techniques by large margins both qualitatively (e.g., edge and contrast recovery) and quantitatively (as indicated by three metrics: peak signal-to-noise ratio, structural similarity index, and contrast-to-noise ratio).
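
A minimal sketch of the input construction: the low-resolution PET (resampled to the MR grid), the high-resolution MR, and radial/axial coordinate maps are stacked as channels and passed through a shallow CNN. The 3-layer architecture and kernel sizes follow a generic SRCNN-style design and are not necessarily those evaluated in the paper.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Shallow 3-layer SR CNN; inputs stacked as channels:
    LR PET, HR MR, radial-location map, axial-location map."""
    def __init__(self, in_ch=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):          # x: (batch, 4, H, W), PET already upsampled to the MR grid
        return self.net(x)

# Coordinate channels encode where a patch sits in the field of view,
# letting the network adapt to the spatially varying PET blur.
h = w = 64
yy, xx = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
radial = torch.sqrt(xx**2 + yy**2).expand(1, 1, h, w)
axial = torch.zeros(1, 1, h, w) + 0.3      # normalized slice position (illustrative)
pet_lr, mr = torch.rand(1, 1, h, w), torch.rand(1, 1, h, w)
sr = SRCNN()(torch.cat([pet_lr, mr, radial, axial], dim=1))
```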

5.
Neural Netw; 125: 83-91, 2020 May.
Article in English | MEDLINE | ID: mdl-32078963

ABSTRACT

The intrinsically low spatial resolution of positron emission tomography (PET) leads to image quality degradation and inaccurate image-based quantitation. Recently developed supervised super-resolution (SR) approaches are of great relevance to PET but require paired low- and high-resolution images for training, which are usually unavailable for clinical datasets. In this paper, we present a self-supervised SR (SSSR) technique for PET based on dual generative adversarial networks (GANs), which obviates the need for paired training data, ensuring wider applicability and adoptability. The SSSR network receives as inputs a low-resolution PET image, a high-resolution anatomical magnetic resonance (MR) image, spatial information (axial and radial coordinates), and a high-dimensional feature set extracted from an auxiliary CNN that is separately trained in a supervised manner using paired simulation datasets. The network is trained using a loss function which includes two adversarial loss terms, a cycle consistency term, and a total variation penalty on the SR image. We validate the SSSR technique using a clinical neuroimaging dataset. We demonstrate that SSSR is promising in terms of image quality, peak signal-to-noise ratio, structural similarity index, contrast-to-noise ratio, and an additional no-reference metric developed specifically for SR image quality assessment. Comparisons with other SSSR variants suggest that its high performance is largely attributable to simulation guidance.
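
The composite generator objective described above can be sketched as follows; the specific loss weights, the use of binary cross-entropy adversarial terms, and the L1 cycle term are assumptions standing in for the published formulation.

```python
import torch
import torch.nn.functional as F

def total_variation(img):
    """Anisotropic total-variation penalty encouraging a piecewise-smooth SR image."""
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    return dh + dw

def generator_loss(sr_image, d_sr_fake, d_lr_fake, cycled_lr, lr_input,
                   lam_cycle=10.0, lam_tv=1e-4):
    """Two adversarial terms (SR- and LR-domain discriminator outputs on the
    generated images), an L1 cycle-consistency term mapping the SR estimate
    back to the LR domain, and a TV penalty on the SR image."""
    adv = (F.binary_cross_entropy_with_logits(d_sr_fake, torch.ones_like(d_sr_fake))
           + F.binary_cross_entropy_with_logits(d_lr_fake, torch.ones_like(d_lr_fake)))
    cycle = F.l1_loss(cycled_lr, lr_input)
    return adv + lam_cycle * cycle + lam_tv * total_variation(sr_image)
```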


Subjects
Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Positron-Emission Tomography/methods; Magnetic Resonance Imaging/methods; Signal-To-Noise Ratio
6.
IEEE Trans Comput Imaging; 5(4): 530-539, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31723575

ABSTRACT

The intrinsically limited spatial resolution of PET confounds image quantitation. This paper presents an image deblurring and super-resolution framework for PET using anatomical guidance provided by high-resolution MR images. The framework relies on image-domain post-processing of already-reconstructed PET images by means of spatially-variant deconvolution stabilized by an MR-based joint entropy penalty function. The method is validated through simulation studies based on the BrainWeb digital phantom, experimental studies based on the Hoffman phantom, and clinical neuroimaging studies pertaining to aging and Alzheimer's disease. The developed technique was compared with direct deconvolution and with deconvolution stabilized by a quadratic difference penalty, a total variation penalty, and a Bowsher penalty. The BrainWeb simulation study showed that the proposed technique improved image quality and quantitative accuracy, as measured by contrast-to-noise ratio, structural similarity index, root-mean-square error, and peak signal-to-noise ratio. The Hoffman phantom study indicated noticeable improvement in the structural similarity index (relative to the MR image) and the gray-to-white contrast-to-noise ratio. Finally, clinical amyloid and tau imaging studies for Alzheimer's disease showed a reduction in the coefficient of variation in several key brain regions associated with the two target pathologies.
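
As an illustration of the anatomical guidance, the snippet below estimates the PET-MR joint entropy from a joint intensity histogram; in an iterative deconvolution loop, this quantity (or a smoothed, Parzen-window variant of it) would be added to the data term as a penalty. The bin count and histogram-based estimator are simplifying assumptions.

```python
import numpy as np

def joint_entropy(pet, mr, bins=64):
    """Joint entropy of PET and MR intensities from a 2-D joint histogram.
    Deconvolution updates that keep the PET-MR joint histogram compact
    (low joint entropy) are favored by such a penalty."""
    hist, _, _ = np.histogram2d(pet.ravel(), mr.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Example: entropy is lower when PET follows MR structure than for pure noise.
rng = np.random.default_rng(0)
mr = rng.random((64, 64))
print(joint_entropy(0.8 * mr + 0.2 * rng.random((64, 64)), mr))  # structured PET
print(joint_entropy(rng.random((64, 64)), mr))                   # unstructured PET
```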

7.
J Med Imaging (Bellingham); 6(2): 024004, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31065568

ABSTRACT

Positron emission tomography (PET) imaging of the lungs is confounded by respiratory motion-induced blurring artifacts that degrade quantitative accuracy. Gating and motion-compensated image reconstruction are frequently used to correct these motion artifacts in PET. In the absence of voxel-by-voxel deformation measures, surrogate signals from external markers are used to track internal motion and generate gated PET images. The objective of our work is to develop a group-level parcellation framework for the lungs to guide the placement of markers depending on the location of the internal target region. We present a data-driven framework based on higher-order singular value decomposition (HOSVD) of deformation tensors that enables identification of synchronous areas inside the torso and on the skin surface. Four-dimensional (4-D) magnetic resonance (MR) imaging based on a specialized radial pulse sequence with a one-dimensional slice-projection navigator was used for motion capture under free-breathing conditions. The deformation tensors were computed by nonrigidly registering the gated MR images. Group-level motion signatures obtained via HOSVD were used to cluster the voxels both inside the volume and on the surface. To characterize the parcellation result, we computed correlation measures across the different regions of interest (ROIs). To assess the robustness of the parcellation technique, leave-one-out cross-validation was performed over the subject cohort, and the dependence of the result on varying numbers of gates and singular value thresholds was examined. Overall, the parcellation results were largely consistent across these test cases, with Jaccard indices reflecting high degrees of overlap. Finally, a PET simulation study was performed, which showed that, depending on the location of the lesion, the selection of a synchronous ROI may lead to noticeable gains in the recovery coefficient. Accurate quantitative interpretation of PET images is important for lung cancer management; a guided motion-monitoring approach is therefore of utmost importance in the context of pulmonary PET imaging.
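
A simplified sketch of the grouping step: deformation magnitudes arranged as a voxels x gates x subjects tensor are unfolded along the voxel mode, a truncated SVD of that unfolding provides per-voxel motion signatures (one factor of an HOSVD), and k-means clusters voxels into synchronous regions. The tensor layout, rank, and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def mode_unfold(tensor, mode):
    """Mode-n unfolding: move the chosen axis to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd_motion_signatures(deform, rank=5):
    """deform: (voxels, gates, subjects) array of deformation magnitudes.
    Returns per-voxel motion signatures from the leading voxel-mode
    singular vectors (the voxel-mode factor of a truncated HOSVD)."""
    u_vox, _, _ = np.linalg.svd(mode_unfold(deform, 0), full_matrices=False)
    return u_vox[:, :rank]                      # (voxels, rank)

# Cluster voxels with similar motion signatures into synchronous regions.
deform = np.random.rand(5000, 8, 10)            # toy data: 5000 voxels, 8 gates, 10 subjects
labels = KMeans(n_clusters=6, n_init=10).fit_predict(hosvd_motion_signatures(deform))
```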
